9 research outputs found

    Capture-based Automated Test Input Generation

    Get PDF
    Testing object-oriented software is critical because object-oriented languages are commonly used in developing modern software systems. Many efficient test input generation techniques for object-oriented software have been proposed; however, state-of-the-art algorithms yield very low code coverage (e.g., less than 50%) on large-scale software. Therefore, one important yet challenging problem is to generate desirable input objects for receivers and arguments that can achieve high code coverage (such as branch coverage) or help reveal bugs. Desirable objects help tests exercise new parts of the code. However, generating desirable objects has been a significant challenge for automated test input generation tools, partly because the search space for such objects is huge. To address this significant challenge, we propose a novel approach called Capture-based Automated Test Input Generation for Object-Oriented Unit Testing (CAPTIG). The contributions of this proposed research are the following. First, CAPTIG enhances method-sequence generation techniques. Our approach introduces a set of new algorithms for guided input and method selection that increase code coverage. In addition, CAPTIG efficiently reduces the amount of generated input. Second, CAPTIG captures objects dynamically from program execution during either system testing or real use. These captured inputs can support existing automated test input generation tools, such as the random testing tool Randoop, in achieving higher code coverage. Third, CAPTIG statically analyzes observed branches that have not been covered and attempts to exercise them by mutating existing inputs, based on weakest-precondition analysis. This technique also contributes to achieving higher code coverage. Fourth, CAPTIG can be used to reproduce software crashes based on crash stack traces. This feature can considerably reduce the cost of analyzing and removing the causes of crashes.
In addition, each CAPTIG technique can be independently applied to leverage existing testing techniques. We anticipate that our approach can achieve higher code coverage in less time and with a smaller amount of test input. To evaluate this new approach, we performed experiments with well-known large-scale open-source software and found that our approach helps achieve higher code coverage with less time and fewer test inputs.
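The capture step at the heart of CAPTIG can be pictured as instrumentation that serializes objects as the program runs, building a pool of realistic inputs for a test generator to draw from. The sketch below is a minimal Python illustration of that idea (the tool itself targets Java); the `index_of` function and the pickle-based pool are hypothetical stand-ins, not CAPTIG's actual implementation.

```python
import pickle
import random

captured = []  # pool of serialized objects observed during real executions


def capture(fn):
    """Record (pickle) every argument an instrumented function receives,
    mimicking CAPTIG's dynamic object capture during system testing or
    real use. Unpicklable objects are simply skipped."""
    def wrapper(*args, **kwargs):
        for a in args:
            try:
                captured.append(pickle.dumps(a))
            except Exception:
                pass
        return fn(*args, **kwargs)
    return wrapper


@capture
def index_of(items, target):  # hypothetical method under observation
    return items.index(target) if target in items else -1


# "Real use" of the program populates the object pool.
index_of([3, 1, 4], 4)
index_of(["a", "b"], "c")


def random_seed_input(rng=random.Random(0)):
    """A test-generation tool can draw captured instances as seed inputs
    instead of constructing every object from scratch."""
    return pickle.loads(rng.choice(captured))
```

Captured instances come from real executions, so they tend to satisfy the implicit invariants that randomly composed method sequences often miss.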

    Practical extensions of a randomized testing tool

    No full text
    Many efficient random testing algorithms for object-oriented software have been proposed due to their simplicity and reasonable code coverage; however, even the state-of-the-art random testing algorithms yield very low code coverage (around 22%) on large-scale software. We propose four testing techniques to improve test coverage. The proposed techniques can be plugged into any existing random testing technique for object-oriented software. We incorporated our techniques into a state-of-the-art random testing tool and tested large-scale software, including Java Collections, Apache Ant, and ASM. Our experimental study shows that the proposed techniques increase branch coverage by up to 21% - a significant improvement. © 2009 IEEE
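The baseline that such extensions plug into is feedback-directed random testing. A minimal sketch of that generation loop follows, in Python for brevity (Randoop itself composes Java method sequences); the `math.sqrt`/`abs` "methods under test" are hypothetical examples, not from the paper.

```python
import math
import random


def feedback_directed_testing(methods, seed_values, rounds=200):
    """Minimal sketch of feedback-directed random test generation:
    repeatedly extend a pool of known-good values with a randomly chosen
    method call, keeping the result only when the call executes without
    raising. Dropping failing extensions is the 'feedback' that prunes
    illegal sequences before they are extended further."""
    rng = random.Random(0)
    pool = list(seed_values)          # values produced by successful calls
    generated_calls = []              # log of (method, input, output)
    for _ in range(rounds):
        m = rng.choice(methods)
        arg = rng.choice(pool)
        try:
            result = m(arg)
        except Exception:
            continue                  # feedback: discard the failing call
        pool.append(result)
        generated_calls.append((m.__name__, arg, result))
    return generated_calls


# Hypothetical methods under test: sqrt raises on negative inputs,
# so only legal call sequences survive into the pool.
calls = feedback_directed_testing([math.sqrt, abs], [-4.0, 9.0])
```

The coverage problem the paper targets shows up here directly: a purely random choice of `methods` and `pool` values rarely constructs the specific objects that hard-to-reach branches require.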

    Practical Extensions of a Randomized Testing Tool

    Get PDF
    Many efficient random testing algorithms for object-oriented software have been proposed due to their simplicity and reasonable code coverage; however, even the state-of-the-art random testing algorithms yield very low code coverage (around 22%) on large-scale software. We propose four testing techniques to improve test coverage. The proposed techniques can be plugged into any existing random testing technique for object-oriented software. We incorporated our techniques into a state-of-the-art random testing tool and tested large-scale software, including Java Collections, Apache Ant, and ASM. Our experimental study shows that the proposed techniques increase branch coverage by up to 21% – a significant improvement. This is a manuscript of a proceeding published as H. Jaygarl, C. K. Chang and S. Kim, "Practical Extensions of a Randomized Testing Tool," 2009 33rd Annual IEEE International Computer Software and Applications Conference, 2009, pp. 148-153, doi: 10.1109/COMPSAC.2009.29. Posted with permission. © 2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    GenRed: A Tool for Generating and Reducing Object-Oriented Test Cases

    No full text
    An important goal of automatic testing techniques, including random testing, is to achieve high code coverage with a minimum set of test cases. To meet this goal, random testing researchers have proposed many techniques to generate test inputs and method-call sequences that yield higher code coverage. However, most proposed random testing techniques are only suitable for toy systems; on large-scale software systems they achieve low code coverage while generating too many unnecessary test cases. We propose GENRED, a tool that combines three approaches: on-demand input creation and coverage-based method selection, which enhance Randoop, a state-of-the-art feedback-directed random testing technique, and a sequence-based reduction technique that removes redundant test cases without executing them. We evaluated GENRED on four open-source systems. The results show that these techniques improve branch coverage by 13.7% and prune 51.8% of the test cases without sacrificing code coverage. This is a manuscript of a proceeding published as H. Jaygarl, K. -S. Lu and C. K. Chang, "GenRed: A Tool for Generating and Reducing Object-Oriented Test Cases," 2010 IEEE 34th Annual Computer Software and Applications Conference, 2010, pp. 127-136, doi: 10.1109/COMPSAC.2010.19. Posted with permission. © 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
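The "without executing them" part of the reduction step can be sketched with a simple prefix rule: a test whose method-call sequence is a strict prefix of another test's sequence cannot exercise anything the longer one does not. This is a simplified Python illustration of sequence-based subsumption, not GENRED's actual algorithm, and the `ArrayList` call names are just illustrative labels.

```python
def reduce_test_suite(test_cases):
    """Sketch of sequence-based test-case reduction: prune any test case
    whose method-call sequence is a strict prefix of another test case's
    sequence. The check is purely syntactic, so no test is ever run."""
    def is_strict_prefix(short, longer):
        return len(short) < len(longer) and longer[:len(short)] == short

    return [t for t in test_cases
            if not any(is_strict_prefix(t, other) for other in test_cases)]


# Each test case is modeled as its sequence of method calls.
suite = [
    ["ArrayList.<init>", "add"],
    ["ArrayList.<init>", "add", "remove"],
    ["ArrayList.<init>"],
]
reduced = reduce_test_suite(suite)  # only the longest sequence survives
```

Because the comparison is static, reduction cost grows with suite size rather than with test execution time, which is what makes pruning half the suite practical.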

    OCAT: Object Capture based Automated Testing

    No full text
    Testing object-oriented (OO) software is critical because OO languages are commonly used in developing modern software systems. In testing OO software, one important and yet challenging problem is to generate desirable object instances for receivers and arguments to achieve high code coverage, such as branch coverage, or to find bugs. Our initial empirical findings show that covering nearly half of the difficult-to-cover branches that a state-of-the-art test-generation tool cannot reach requires desirable object instances that the tool fails to generate. Generating desirable object instances has been a significant challenge for automated test-generation tools, partly because the search space for such object instances is huge, regardless of whether these tools compose method sequences to produce object instances or construct object instances directly. To address this significant challenge, we propose a novel approach called Object Capture based Automated Testing (OCAT). OCAT captures object instances dynamically from program executions (e.g., ones from system testing or real use). These captured object instances assist an existing automated test-generation tool, such as a random testing tool, to achieve higher code coverage. Afterwards, OCAT mutates the collected instances, guided by the branches observed to be not yet covered. We evaluated OCAT on three open source projects, and our empirical results show that OCAT helps a state-of-the-art random testing tool, Randoop, achieve high branch coverage: on average 68.5%, a 25.5 percentage-point improvement over the 43.0% achieved by Randoop alone. © 2010 ACM
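The mutation step described above can be sketched as: clone a captured instance and overwrite the field that an uncovered branch condition constrains. The Python fragment below is an illustrative stand-in only; the `Request` class, field name, and value are hypothetical, and the real tool derives them from analysis of the uncovered branch rather than taking them as arguments.

```python
import copy


class Request:
    """Hypothetical class whose instances were captured at run time."""
    def __init__(self, size):
        self.size = size


def mutate_for_branch(instance, field, required_value):
    """Sketch of capture-then-mutate: suppose an uncovered branch is
    guarded by `request.size > 100` and no captured instance satisfies
    it. Deep-copying a captured instance and overwriting the relevant
    field yields an input that steers execution into that branch, while
    the untouched fields keep the realistic state of the original."""
    clone = copy.deepcopy(instance)
    setattr(clone, field, required_value)
    return clone


captured = Request(size=3)                        # object seen in real use
mutant = mutate_for_branch(captured, "size", 101)  # now takes the branch
```

Starting from a captured instance matters: a mutant inherits the rest of a realistic object state, whereas an object built from scratch must satisfy every invariant at once.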

    Professional Tizen application development

    No full text
    Create powerful, marketable applications with Tizen for the smartphone and beyond. Tizen is the only platform designed for multiple device categories that is HTML5-centric and entirely open source. Written by experts in the field, this comprehensive guide includes chapters on both web and native application development, covering subjects such as location and social features, advanced UIs, animations, sensors, and multimedia. This book is a comprehensive resource for learning how to develop Tizen web and native applications that are polished, bug-free, and ready to sell on a range of smart devices.

    Incremental Latent Semantic Indexing for Automatic Traceability Link Evolution Management

    No full text
    Maintaining traceability links among software artifacts is particularly important for many software engineering tasks. Even though automatic traceability link recovery tools are successful in identifying the semantic connections among software artifacts produced during software development, no existing traceability link management approach can effectively and automatically deal with software evolution. We propose a technique to automatically manage traceability link evolution and update the links in evolving software. Our novel technique, called incremental Latent Semantic Indexing (iLSI), allows for fast, low-cost LSI computation for the update of traceability links by analyzing the changes to software artifacts and by reusing the result of the previous LSI computation before the changes. We present our iLSI technique and describe a complete automatic traceability link evolution management tool, TLEM, that is capable of interactively and quickly updating traceability links in the presence of evolving software artifacts. We report on our empirical evaluation with various experimental studies to assess the performance and usefulness of our approach. This is a manuscript of a proceeding published as H. -Y. Jiang, T. N. Nguyen, I. -X. Chen, H. Jaygarl and C. K. Chang, "Incremental Latent Semantic Indexing for Automatic Traceability Link Evolution Management," 2008 23rd IEEE/ACM International Conference on Automated Software Engineering, 2008, pp. 59-68, doi: 10.1109/ASE.2008.16. Posted with permission. © 2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
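The SVD recomputation that an incremental scheme avoids can be seen from the standard LSI formulation. The equations below show the rank-k approximation and the classic "folding-in" baseline for adding a changed artifact without a fresh SVD; they illustrate the setting, not iLSI's specific update rule, which the paper derives by reusing the previous computation.

```latex
% Rank-k LSI approximation of the term-by-artifact matrix A:
A \approx A_k = U_k \Sigma_k V_k^{\mathsf{T}}

% Classic folding-in: project a new or changed artifact's term vector d
% into the existing k-dimensional space without recomputing the SVD:
\hat{d} = \Sigma_k^{-1} U_k^{\mathsf{T}} d

% Candidate traceability links are then ranked by cosine similarity
% between artifact coordinates in the reduced space:
\operatorname{sim}(\hat{d}, v_j) =
  \frac{\hat{d} \cdot v_j}{\lVert \hat{d} \rVert \, \lVert v_j \rVert}
```

Folding-in is cheap but degrades the space as changes accumulate, which is why an incremental update that accounts for the changes, as iLSI does, is needed for continuously evolving artifacts.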